Azure AI Foundry Guardrails | Episode 27
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –
In this episode of BHIS Presents: AI Security Ops, we explore how to configure content filters for AI models using the Azure AI Foundry guardrails and controls interface. Whether you're building secure demos or deploying models in production, this walkthrough shows how to block unwanted content, enforce policy, and maintain compliance.
Topics Covered:
- Changing default filters for demo compliance
- Setting up a system prompt and understanding its role
- Adding regex terms to block specific content
- Creating and configuring a custom filter: “tech demo guardrails”
- Input-side filtering: inspecting user text before model access
- Safety vs. security categories in filtering
- Enabling prompt shields for indirect jailbreak detection
This video is ideal for developers, security engineers, and anyone working with AI systems who needs to implement layered defenses and ensure responsible model behavior.
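For a quick look at the concepts before watching, here is a minimal Python sketch of the input-side flow covered in the episode. It is conceptual only, not the Azure AI Foundry interface: a system prompt plus a regex blocklist (mirroring the demo's "dogs" term) inspects user text before it ever reaches the model. The call_model function is a hypothetical stand-in for your deployed model endpoint.

```python
import re

# Regex blocklist; the "dogs" pattern mirrors the term added in the demo.
BLOCKLIST = [re.compile(r"\bdogs?\b", re.IGNORECASE)]

# System prompt: standing instructions sent with every request;
# user input is appended after it and should not override it.
SYSTEM_PROMPT = "You are a tech demo assistant. Stay on topic and refuse disallowed requests."

def passes_input_filter(user_text: str) -> bool:
    """Input-side filtering: inspect user text before it reaches the model."""
    return not any(pattern.search(user_text) for pattern in BLOCKLIST)

def call_model(system: str, user: str) -> str:
    # Hypothetical placeholder for a call to your deployed model
    # (e.g., via the Azure OpenAI SDK).
    return f"[model response to {user!r}]"

def handle_request(user_text: str) -> str:
    if not passes_input_filter(user_text):
        return "Request blocked by content filter."
    return call_model(SYSTEM_PROMPT, user_text)

print(handle_request("Tell me about dogs"))   # blocked on the input side
print(handle_request("Summarize the demo"))   # passed through to the model
```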
Why This Matters
By implementing layered security—block lists, input and output filters—you protect sensitive data, comply with policy, and maintain a safe user experience.
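The content filters configured in the Foundry portal are backed by the Azure AI Content Safety service, so the same safety-category checks can also be run from code on both input and output. The sketch below assumes the azure-ai-contentsafety Python package (v1.x); the environment variable names and the severity threshold are illustrative assumptions, not values from the demo.

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Endpoint and key come from an Azure AI Content Safety (or Foundry) resource;
# the environment variable names here are assumptions for this sketch.
client = ContentSafetyClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

def is_safe(text: str, max_severity: int = 2) -> bool:
    """Return True if every safety category (hate, sexual, violence, self-harm)
    scores at or below the chosen severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all((item.severity or 0) <= max_severity for item in result.categories_analysis)

# Layered defense: screen the user's input, then screen the model's output
# with the same check before returning it.
user_text = "example user prompt"
if is_safe(user_text):
    model_output = "example model response"  # produced by your deployed model
    if not is_safe(model_output):
        model_output = "Response withheld by output filter."
else:
    model_output = "Request blocked by input filter."
```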
#AIsecurity #GuardrailsAndControls #ContentFiltering #PromptSecurity #RegexFiltering #BHIS #AIModelSafety #SystemPromptSecurity
Brought to you by Black Hills Information Security
https://www.blackhillsinfosec.com
----------------------------------------------------------------------------------------------
Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/
Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/
Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
- (00:00) - Introduction & Overview
- (01:17) - Changing the Default Content Filter for Demo Compliance
- (02:00) - Setting Up a System Prompt and Its Purpose
- (04:26) - Adding a New Term (“dogs”) to the Content Filter (Regex Example)
- (05:04) - Creating and Configuring a Content Filter Named “Tech Demo Guardrails”
- (05:35) - How Input-Side Filters Inspect and Block Unwanted Content
- (06:01) - Overview of Safety Categories vs. Security Categories
- (07:15) - Enabling Prompt Shields for Indirect Jailbreak Detection (Not Used in Demo)
- (08:30) - Summary & Next Steps